Deep neural networks for rotation-invariance approximation and learning
Authors
Abstract
Similar resources
Provable approximation properties for deep neural networks
We discuss approximation of functions using deep neural nets. Given a function f on a d-dimensional manifold Γ ⊂ R^m, we construct a sparsely-connected depth-4 neural network and bound its error in approximating f. The size of the network depends on the dimension and curvature of the manifold Γ, on the complexity of f in terms of its wavelet description, and only weakly on the ambient dimension m. Es...
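To convey the flavor of such a result, the bound has roughly the following shape (a hedged sketch only; the error functional and constants here are illustrative, not the paper's exact statement):

    \[
    \| f - \hat{f}_N \|_{L^2(\Gamma)} \;\le\; C(d, \kappa_\Gamma)\, E_N^{\mathrm{wav}}(f),
    \]

where \hat{f}_N is a sparsely-connected depth-4 network with N units, E_N^{\mathrm{wav}}(f) is the best N-term wavelet approximation error of f on Γ, and the constant depends on the intrinsic dimension d and the curvature κ_Γ of the manifold, but only weakly on the ambient dimension m.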
Why Deep Neural Networks for Function Approximation?
Recently there has been much interest in understanding why deep neural networks are preferred to shallow networks. We show that, for a large class of piecewise smooth functions, the number of neurons needed by a shallow network to approximate a function is exponentially larger than the corresponding number of neurons needed by a deep network for a given degree of function approximation. First, ...
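The classic way to see such a depth separation is a sawtooth construction in the spirit of Telgarsky's argument. The Python sketch below is an illustrative instance of the phenomenon, not necessarily this paper's proof technique (the function names are ours):

    import numpy as np

    def relu(x):
        return np.maximum(x, 0.0)

    def hat(x):
        # Triangle ("tent") map on [0, 1]; exactly two ReLU units:
        # g(x) = 2*relu(x) - 4*relu(x - 0.5).
        return 2.0 * relu(x) - 4.0 * relu(x - 0.5)

    def deep_sawtooth(x, k):
        # k-fold composition g o g o ... o g: a depth-k ReLU network with
        # only 2 units per layer (2k units in total), yet its graph is a
        # sawtooth with 2**k linear pieces.
        for _ in range(k):
            x = hat(x)
        return x

    k = 6
    # Grid chosen so every breakpoint j / 2**k lands exactly on a sample.
    xs = np.linspace(0.0, 1.0, 4 * 2**k + 1)
    ys = deep_sawtooth(xs, k)
    slopes = np.diff(ys) / np.diff(xs)
    pieces = 1 + int(np.count_nonzero(np.abs(np.diff(slopes)) > 1e-6))
    print(f"depth {k}, {2 * k} ReLU units -> {pieces} linear pieces")
    # A one-hidden-layer ReLU net with n units has at most n + 1 linear
    # pieces, so matching this function needs on the order of 2**k units.

Running it prints 64 linear pieces from only 12 units arranged in depth 6, which is the exponential gap the abstract describes.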
Rotation Invariance Neural Network
Rotation invariance and translation invariance have great value in image recognition tasks. In this paper, we introduce a new convolutional neural network (CNN) architecture, the cyclic convolutional layer, to achieve rotation invariance in 2-D symbol recognition. The network can also recover the position and orientation of the 2-D symbol, enabling detection of multiple non-overlap ...
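One common way to realize rotation invariance in a CNN, shown here as a hedged sketch (the class name and the 90-degree cyclic-pooling design are our illustration; the paper's cyclic convolutional layer may be built differently), is to share one filter across rotated copies of the input and pool over orientations:

    import torch
    import torch.nn as nn

    class RotationPooledConv(nn.Module):
        # One shared convolution applied to the input rotated by 0, 90,
        # 180 and 270 degrees; responses are rotated back into a common
        # frame and max-pooled over the four orientations. The feature
        # map is then equivariant to 90-degree input rotations, and a
        # global pooling on top yields a rotation-invariant descriptor.
        def __init__(self, in_ch, out_ch, k=3):
            super().__init__()
            self.conv = nn.Conv2d(in_ch, out_ch, k, padding=k // 2)

        def forward(self, x):
            outs = []
            for r in range(4):
                xr = torch.rot90(x, r, dims=(2, 3))   # rotate input
                yr = self.conv(xr)                    # shared filter
                outs.append(torch.rot90(yr, -r, dims=(2, 3)))  # align back
            return torch.stack(outs, 0).amax(dim=0)   # pool orientations

    layer = RotationPooledConv(1, 8)
    x = torch.randn(2, 1, 28, 28)
    print(layer(x).shape)  # -> torch.Size([2, 8, 28, 28])

Pooling with max keeps the strongest response over orientations; averaging is an equally common choice and is smoother to train.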
Fast algorithms for learning deep neural networks
With the increase in computation power and data availability in recent times, machine learning and statistics have seen enormous development and widespread application in areas such as computer vision, computational biology, and others. A focus of current research is deep neural nets: nested functions consisting of a hierarchy of layers with thousands of weights and nonlinear hidden units. Th...
Bayesian Incremental Learning for Deep Neural Networks
In industrial machine learning pipelines, data often arrive in parts. Particularly in the case of deep neural networks, it may be too expensive to train the model from scratch each time, so one would rather use a previously learned model and the new data to improve performance. However, deep neural networks are prone to getting stuck in a suboptimal solution when trained on only new data as com...
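A minimal sketch of the Bayesian incremental idea, assuming the common Gaussian posterior-as-prior approximation (Laplace/EWC-style; the function and variable names are ours, and the paper's exact scheme may differ): keep the old weights and a per-parameter precision from the previous stage, then fit the new data under a quadratic penalty instead of retraining from scratch.

    import torch
    import torch.nn as nn

    model = nn.Linear(4, 1)
    # Snapshot after training on the old data: old weights and a (here
    # hypothetical, uniform) per-parameter precision; in practice the
    # precision would come from e.g. a diagonal Fisher estimate.
    theta_old = {n: p.detach().clone() for n, p in model.named_parameters()}
    precision = {n: torch.ones_like(p) for n, p in model.named_parameters()}

    def incremental_loss(task_loss, lam=1.0):
        # task_loss: loss on the NEW data only. The quadratic term is
        # the negative log of a Gaussian prior centered at the old
        # weights, so minimizing it approximates a Bayesian update and
        # discourages forgetting the previously learned solution.
        penalty = sum((precision[n] * (p - theta_old[n]) ** 2).sum()
                      for n, p in model.named_parameters())
        return task_loss + 0.5 * lam * penalty

    x_new, y_new = torch.randn(8, 4), torch.randn(8, 1)
    loss = incremental_loss(nn.functional.mse_loss(model(x_new), y_new))
    loss.backward()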
Journal
Journal title: Analysis and Applications
Year: 2019
ISSN: 0219-5305, 1793-6861
DOI: 10.1142/s0219530519400074